
    Multi-copy and stochastic transformation of multipartite pure states

    Characterizing the transformation and classification of multipartite entangled states is a basic problem in quantum information. We study this problem under the two most common settings, local operations and classical communication (LOCC) and stochastic LOCC (SLOCC), as well as under two more general settings, multi-copy LOCC (MCLOCC) and multi-copy SLOCC (MCSLOCC). We show that two multipartite states transformable under LOCC or SLOCC are also transformable under MCLOCC and MCSLOCC. Moreover, these two settings are equivalent, in the sense that two states transformable under MCLOCC are also transformable under MCSLOCC, and vice versa. Based on these settings we classify multipartite pure states into a few inequivalent sets and orbits, between which we build a partial order that decides their transformation. In particular, we investigate the structure of SLOCC-equivalent states in terms of tensor rank, also known as the generalized Schmidt rank. We show that GHZ states can be used to generate, under SLOCC, all states of smaller or equal tensor rank, and, under LOCC, all reduced separable states of cardinality at most the tensor rank. Using these concepts, we extend the notion of a "maximally entangled state" to multipartite systems.
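
    A minimal sketch of the tensor-rank statement (notation ours, not the paper's): writing $|\mathrm{GHZ}_r\rangle = \sum_{i=1}^{r} |i\,i\,i\rangle$, any tripartite pure state of tensor rank at most $r$ decomposes as

    $$|\psi\rangle = \sum_{i=1}^{r} |u_i\rangle \otimes |v_i\rangle \otimes |w_i\rangle = (A \otimes B \otimes C)\,|\mathrm{GHZ}_r\rangle, \qquad A|i\rangle = |u_i\rangle,\ B|i\rangle = |v_i\rangle,\ C|i\rangle = |w_i\rangle,$$

    so it is reached from $|\mathrm{GHZ}_r\rangle$ by local (generally non-invertible) operators, i.e., by an SLOCC protocol.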

    Efficient algorithms for tensor scaling, quantum marginals and moment polytopes

    We present a polynomial time algorithm to approximately scale tensors of any format to arbitrary prescribed marginals (whenever possible). This unifies and generalizes a sequence of past works on matrix, operator, and tensor scaling. Our algorithm provides an efficient weak membership oracle for the associated moment polytopes, an important family of implicitly-defined convex polytopes with exponentially many facets and a wide range of applications. These include the entanglement polytopes from quantum information theory (in particular, we obtain an efficient solution to the notorious one-body quantum marginal problem) and the Kronecker polytopes from representation theory (which capture the asymptotic support of Kronecker coefficients). Our algorithm can be applied to succinct descriptions of the input tensor whenever the marginals can be efficiently computed, as in the important case of matrix product states or tensor-train decompositions, widely used in computational physics and numerical mathematics. We strengthen and generalize the alternating minimization approach of previous papers by introducing the theory of highest weight vectors from representation theory into the numerical optimization framework. We show that highest weight vectors are natural potential functions for scaling algorithms and prove new bounds on their evaluations to obtain polynomial-time convergence. Our techniques are general, and we believe that they will be instrumental in obtaining efficient algorithms for moment polytopes beyond the ones considered here and, more broadly, for other optimization problems possessing natural symmetries.
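
    As a toy illustration of the alternating approach (a commutative special case with diagonal scalings only; the function name, uniform targets, and stopping rule are ours, and the paper's algorithm is far more general):

```python
import numpy as np

def scale_to_uniform_marginals(T, iters=1000, tol=1e-12):
    """Sinkhorn-style alternating scaling of a nonnegative tensor so
    that every one-dimensional marginal becomes uniform.

    Toy diagonal special case of tensor scaling; assumes all
    marginals stay strictly positive.
    """
    T = np.array(T, dtype=float)
    total = T.sum()
    for _ in range(iters):
        err = 0.0
        for axis in range(T.ndim):
            n = T.shape[axis]
            other = tuple(a for a in range(T.ndim) if a != axis)
            marg = T.sum(axis=other)            # current marginal of this mode
            target = total / n                  # uniform target; total mass is preserved
            err = max(err, np.abs(marg - target).max())
            shape = [1] * T.ndim
            shape[axis] = n
            T = T * (target / marg).reshape(shape)  # rescale the slices of this mode
        if err < tol * total:
            break
    return T
```

    For a matrix (a 2-tensor) this reduces to Sinkhorn's classical scaling iteration.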

    Effective constructions in plethysms and Weintraub's conjecture

    We give a short proof of Weintraub's conjecture by constructing explicit highest weight vectors in the symmetric power of an even exterior power.

    Log-concavity and lower bounds for arithmetic circuits

    One question that we investigate in this paper is: how can we build log-concave polynomials using sparse polynomials as building blocks? More precisely, let $f = \sum_{i=0}^{d} a_i X^i \in \mathbb{R}^+[X]$ be a polynomial satisfying the log-concavity condition $a_i^2 > \tau a_{i-1}a_{i+1}$ for every $i \in \{1,\ldots,d-1\}$, where $\tau > 0$. Whenever $f$ can be written under the form $f = \sum_{i=1}^{k} \prod_{j=1}^{m} f_{i,j}$, where the polynomials $f_{i,j}$ have at most $t$ monomials, it is clear that $d \leq k t^m$. Assuming that the $f_{i,j}$ have only non-negative coefficients, we improve this degree bound to $d = \mathcal{O}(k m^{2/3} t^{2m/3} \log^{2/3}(kt))$ if $\tau > 1$, and to $d \leq kmt$ if $\tau = d^{2d}$. This investigation has a complexity-theoretic motivation: we show that a suitable strengthening of the above results would imply a separation of the algebraic complexity classes VP and VNP. As they currently stand, these results are strong enough to provide a new example of a family of polynomials in VNP which cannot be computed by monotone arithmetic circuits of polynomial size.
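
    For the "clear" bound, a one-line account under the stated hypotheses (our phrasing): each product $\prod_{j=1}^{m} f_{i,j}$ has at most $t^m$ monomials, so $f$ has at most $k t^m$ monomials; and since $a_i^2 > \tau a_{i-1}a_{i+1} \ge 0$ forces $a_i \neq 0$ for every $i \in \{1,\ldots,d-1\}$, the polynomial $f$ has at least $d$ monomials (counting the leading one), whence $d \leq k t^m$.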

    Alternating Minimization, Scaling Algorithms, and the Null-Cone Problem from Invariant Theory

    Alternating minimization heuristics seek to solve a (difficult) global optimization task through iteratively solving a sequence of (much easier) local optimization tasks on different parts (or blocks) of the input parameters. While popular and widely applicable, very few instances of this heuristic are rigorously shown to converge to optimality, and even fewer to do so efficiently. In this paper we present a general framework which is amenable to rigorous analysis, and expose its applicability. Its main feature is that the local optimization domains are each a group of invertible matrices, together naturally acting on tensors, and the optimization problem is minimizing the norm of an input tensor under this joint action. The solution of this optimization problem captures a basic problem in Invariant Theory, called the null-cone problem. This algebraic framework turns out to encompass natural computational problems in combinatorial optimization, algebra, analysis, quantum information theory, and geometric complexity theory. It includes and extends to higher dimensions the recent advances on (2-dimensional) operator scaling. Our main result is a fully polynomial time approximation scheme for this general problem, which may be viewed as a multi-dimensional scaling algorithm. This directly leads to progress on some of the problems in the areas above, and a unified view of others. We explain how faster convergence of an algorithm for the same problem would allow resolving central open problems. Our main techniques come from Invariant Theory, and include its rich non-commutative duality theory and new bounds on the bitsizes of coefficients of invariant polynomials. They enrich the algorithmic toolbox of this very computational field of mathematics, and are directly related to some challenges in geometric complexity theory (GCT).
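
    A compact sketch of one pass of this scheme for a 3-tensor under $SL(n_1) \times SL(n_2) \times SL(n_3)$ (function name, stopping rule, and the eigenvalue clipping are ours; the paper's contribution is the rigorous convergence analysis, not this loop):

```python
import numpy as np

def null_cone_descent(T, iters=200, tol=1e-8):
    """Alternating minimization of ||(g1 (x) g2 (x) g3) . T|| over
    determinant-one matrices acting on the three legs of a tensor.

    Each block step has a closed form: if rho is the (Hermitian PSD)
    marginal of T on one leg, the det-normalized rho^{-1/2} minimizes
    the norm over that block.  The norm can be driven towards 0 iff T
    lies in the null cone.  Sketch only; assumes nonsingular marginals.
    """
    T = np.array(T, dtype=complex)
    for _ in range(iters):
        off = 0.0
        for axis in range(3):
            n = T.shape[axis]
            M = np.moveaxis(T, axis, 0).reshape(n, -1)   # mode unfolding
            rho = M @ M.conj().T / np.linalg.norm(T)**2  # trace-one marginal
            off = max(off, np.linalg.norm(rho - np.eye(n) / n))
            w, V = np.linalg.eigh(rho)
            g = V @ np.diag(np.clip(w, 1e-15, None) ** -0.5) @ V.conj().T
            g /= np.linalg.det(g).real ** (1.0 / n)      # restrict to det = 1
            T = np.moveaxis(np.tensordot(g, np.moveaxis(T, axis, 0), axes=1), 0, axis)
        if off < tol:                                    # all marginals ~ identity/n
            break
    return T  # ||T|| shrinking towards 0 signals null-cone membership
```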

    Polynomial Time Algorithms in Invariant Theory for Torus Actions

    An action of a group on a vector space partitions the latter into a set of orbits. We consider three natural and useful algorithmic "isomorphism" or "classification" problems, namely, orbit equality, orbit closure intersection, and orbit closure containment. These capture and relate to a variety of problems within mathematics, physics, computer science, optimization, and statistics. These orbit problems extend the more basic null-cone problem, whose algorithmic complexity has seen significant progress in recent years. In this paper, we initiate a study of these problems by focusing on the actions of commutative groups (namely, tori). We explain how this setting is motivated by questions in algebraic complexity, and is still rich enough to capture interesting combinatorial algorithmic problems. While the structural theory of commutative actions is well understood, no general efficient algorithms were known for the aforementioned problems. Our main results are polynomial time algorithms for all three problems. We also show how to efficiently find separating invariants for orbits, and how to compute systems of generating rational invariants for these actions (in contrast, for polynomial invariants the latter is known to be hard). Our techniques are based on a combination of fundamental results in invariant theory, linear programming, and algorithmic lattice theory.
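
    The most basic of these, null-cone membership for a torus action, already shows the linear-programming flavor of the techniques. A sketch via the Hilbert-Mumford criterion (interface and names ours; the weight matrix is assumed to be given with the representation):

```python
import numpy as np
from scipy.optimize import linprog

def torus_null_cone(weights, v, tol=1e-9):
    """Null-cone membership under a d-dimensional torus action.

    weights: (N, d) array whose i-th row is the weight of the i-th
    coordinate; v: the vector to test.  By the Hilbert-Mumford
    criterion, v is in the null cone iff 0 lies outside the convex
    hull of the weights on v's support -- one LP feasibility check.
    """
    W = np.asarray(weights, dtype=float)
    support = np.abs(np.asarray(v)) > tol
    if not support.any():
        return True                        # the zero vector is in the null cone
    A = W[support].T                       # d x s: weights on the support
    s = A.shape[1]
    res = linprog(np.zeros(s),             # feasibility only: A t = 0, sum(t) = 1, t >= 0
                  A_eq=np.vstack([A, np.ones((1, s))]),
                  b_eq=np.concatenate([np.zeros(A.shape[0]), [1.0]]),
                  bounds=[(0, None)] * s)
    return not res.success                 # infeasible <=> 0 outside the hull
```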

    Balancing Bounded Treewidth Circuits

    Algorithmic tools for graphs of small treewidth are used to address questions in complexity theory. For both arithmetic and Boolean circuits, it is shown that any circuit of size $n^{O(1)}$ and treewidth $O(\log^i n)$ can be simulated by a circuit of width $O(\log^{i+1} n)$ and size $n^c$, where $c = O(1)$ if $i = 0$, and $c = O(\log\log n)$ otherwise. For our main construction, we prove that multiplicatively disjoint arithmetic circuits of size $n^{O(1)}$ and treewidth $k$ can be simulated by bounded fan-in arithmetic formulas of depth $O(k^2 \log n)$. From this we derive the analogous statement for syntactically multilinear arithmetic circuits, which strengthens a theorem of Mahajan and Rao. As another application, we derive that constant-width arithmetic circuits of size $n^{O(1)}$ can be balanced to depth $O(\log n)$, provided certain restrictions are made on the use of iterated multiplication. Also from our main construction, we derive that Boolean bounded fan-in circuits of size $n^{O(1)}$ and treewidth $k$ can be simulated by bounded fan-in formulas of depth $O(k^2 \log n)$. This strengthens, in the non-uniform setting, the known inclusion $SC^0 \subseteq NC^1$. Finally, we apply our construction to show that reachability for directed graphs of bounded treewidth is in $\mathrm{LogDCFL}$.
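
    The core primitive behind such depth reduction for formulas (trees) is the classical Brent-style 1/3-2/3 separator; the paper's contribution is extending balancing to bounded-treewidth circuits, which needs much more machinery. A quadratic-time sketch for binary trees (names and interface ours):

```python
def balancing_separator(children, root):
    """Return a gate whose subtree holds between n/3 and 2n/3 of the
    n leaves (for nontrivial binary trees).

    children: dict mapping each node to a pair of children, or None
    for a leaf.  Recomputing leaf counts makes this quadratic; a
    linear-time version would memoize them.
    """
    def leaves(u):
        kids = children[u]
        return 1 if kids is None else sum(leaves(k) for k in kids)

    n = leaves(root)
    v = root
    while children[v] is not None and leaves(v) > 2 * n / 3:
        v = max(children[v], key=leaves)   # descend into the heavier subtree
    return v
```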

    Numerically stable computation of CreditRisk+

    The CreditRisk+ model launched by CSFB in 1997 is widely used by practitioners in the banking sector as a simple means for the quantification of credit risk, primarily of the loan book. We present an alternative numerical recursion scheme for CreditRisk+, equivalent to an algorithm recently proposed by Giese, based on well-known expansions of the logarithm and the exponential of a power series. We show that it has an advantage over the Panjer recursion advocated in the original CreditRisk+ document, in that it is numerically stable. The crucial stability arguments are explained in detail. Furthermore, the computational complexity of the resulting algorithm is stated.
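
    The exponential half of that expansion pair is easy to make concrete. A sketch (function name and interface ours) of the cancellation-free recursion that underlies the stability argument:

```python
import numpy as np

def exp_series(a, n_terms):
    """Coefficients b of exp(f), where f(z) = sum_k a[k] z^k.

    Differentiating g = exp(f) gives g' = f' * g, hence
        n * b[n] = sum_{k=1..n} k * a[k] * b[n-k],  b[0] = exp(a[0]).
    When a[k] >= 0 for k >= 1 -- as for the logarithm of the
    CreditRisk+ probability-generating function -- every term is
    nonnegative, so the recursion involves no cancellation.  The
    paper's full scheme pairs this with the analogous log-series
    recursion.
    """
    a = np.asarray(a, dtype=float)
    b = np.zeros(n_terms)
    b[0] = np.exp(a[0])
    for n in range(1, n_terms):
        kmax = min(n, len(a) - 1)
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, kmax + 1)) / n
    return b
```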